{ "cells": [ { "cell_type": "markdown", "id": "0463f227", "metadata": { "origin_pos": 1 }, "source": [ "# The Transformer Architecture\n", ":label:`sec_transformer`\n", "\n", "\n", "We have compared CNNs, RNNs, and self-attention in\n", ":numref:`subsec_cnn-rnn-self-attention`.\n", "Notably, self-attention\n", "enjoys both parallel computation and\n", "the shortest maximum path length.\n", "Therefore,\n", "it is appealing to design deep architectures\n", "by using self-attention.\n", "Unlike earlier self-attention models\n", "that still rely on RNNs for input representations :cite:`Cheng.Dong.Lapata.2016,Lin.Feng.Santos.ea.2017,Paulus.Xiong.Socher.2017`,\n", "the Transformer model\n", "is solely based on attention mechanisms\n", "without any convolutional or recurrent layer :cite:`Vaswani.Shazeer.Parmar.ea.2017`.\n", "Though originally proposed\n", "for sequence-to-sequence learning on text data,\n", "Transformers have been\n", "pervasive in a wide range of\n", "modern deep learning applications,\n", "such as in areas to do with language, vision, speech, and reinforcement learning.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "ee18893c", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:06.687415Z", "iopub.status.busy": "2023-08-18T19:50:06.687094Z", "iopub.status.idle": "2023-08-18T19:50:09.889628Z", "shell.execute_reply": "2023-08-18T19:50:09.888444Z" }, "origin_pos": 3, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "import math # Basic math helpers (e.g., square roots for scaling embeddings)\n", "import pandas as pd # DataFrame utilities for reshaping attention weights\n", "import torch # Core tensor library for tensors and autograd\n", "from torch import nn # Neural network building blocks\n", "from d2l import torch as d2l # Dive into Deep Learning PyTorch utilities\n", "\n", " \n", "import importlib\n", "import hw7\n", "importlib.reload(hw7)\n", "from hw7 import *\n", "\n", "from tsv_seq2seq_data import TSVSeq2SeqData\n", "import os" ] }, { "cell_type": "markdown", "id": "23e72a17", "metadata": { "origin_pos": 6 }, "source": [ "## Model\n", "\n", "As an instance of the encoder--decoder\n", "architecture,\n", "the overall architecture of\n", "the Transformer\n", "is presented in :numref:`fig_transformer`.\n", "As we can see,\n", "the Transformer is composed of an encoder and a decoder.\n", "In contrast to\n", "Bahdanau attention\n", "for sequence-to-sequence learning\n", "in :numref:`fig_s2s_attention_details`,\n", "the input (source) and output (target)\n", "sequence embeddings\n", "are added with positional encoding\n", "before being fed into\n", "the encoder and the decoder\n", "that stack modules based on self-attention.\n", "\n", "![The Transformer architecture.](../img/transformer.svg)\n", ":width:`320px`\n", ":label:`fig_transformer`\n", "\n", "\n", "Now we provide an overview of the\n", "Transformer architecture in :numref:`fig_transformer`.\n", "At a high level,\n", "the Transformer encoder is a stack of multiple identical layers,\n", "where each layer\n", "has two sublayers (either is denoted as $\\textrm{sublayer}$).\n", "The first\n", "is a multi-head self-attention pooling\n", "and the second is a positionwise feed-forward network.\n", "Specifically,\n", "in the encoder self-attention,\n", "queries, keys, and values are all from the\n", "outputs of the previous encoder layer.\n", "Inspired by the ResNet design of :numref:`sec_resnet`,\n", "a residual connection is employed\n", "around both sublayers.\n", "In the Transformer,\n", 
"for any input $\\mathbf{x} \\in \\mathbb{R}^d$ at any position of the sequence,\n", "we require that $\\textrm{sublayer}(\\mathbf{x}) \\in \\mathbb{R}^d$ so that\n", "the residual connection $\\mathbf{x} + \\textrm{sublayer}(\\mathbf{x}) \\in \\mathbb{R}^d$ is feasible.\n", "This addition from the residual connection is immediately\n", "followed by layer normalization :cite:`Ba.Kiros.Hinton.2016`.\n", "As a result, the Transformer encoder outputs a $d$-dimensional vector representation\n", "for each position of the input sequence.\n", "\n", "The Transformer decoder is also a stack of multiple identical layers\n", "with residual connections and layer normalizations.\n", "As well as the two sublayers described in\n", "the encoder, the decoder inserts\n", "a third sublayer, known as\n", "the encoder--decoder attention,\n", "between these two.\n", "In the encoder--decoder attention,\n", "queries are from the\n", "outputs of the decoder's self-attention sublayer,\n", "and the keys and values are\n", "from the Transformer encoder outputs.\n", "In the decoder self-attention,\n", "queries, keys, and values are all from the\n", "outputs of the previous decoder layer.\n", "However, each position in the decoder is\n", "allowed only to attend to all positions in the decoder\n", "up to that position.\n", "This *masked* attention\n", "preserves the autoregressive property,\n", "ensuring that the prediction only depends\n", "on those output tokens that have been generated.\n", "\n", "\n", "We have already described and implemented\n", "multi-head attention based on scaled dot products\n", "in :numref:`sec_multihead-attention`\n", "and positional encoding in :numref:`subsec_positional-encoding`.\n", "In the following, we will implement\n", "the rest of the Transformer model.\n", "\n", "## [**Positionwise Feed-Forward Networks**]\n", ":label:`subsec_positionwise-ffn`\n", "\n", "The positionwise feed-forward network transforms\n", "the representation at all the sequence positions\n", "using the same MLP.\n", "This is why we call it *positionwise*.\n", "In the implementation below,\n", "the input `X` with shape\n", "(batch size, number of time steps or sequence length in tokens,\n", "number of hidden units or feature dimension)\n", "will be transformed by a two-layer MLP into\n", "an output tensor of shape\n", "(batch size, number of time steps, `ffn_num_outputs`).\n" ] }, { "cell_type": "markdown", "id": "623f67ee", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:09.894092Z", "iopub.status.busy": "2023-08-18T19:50:09.893416Z", "iopub.status.idle": "2023-08-18T19:50:09.899737Z", "shell.execute_reply": "2023-08-18T19:50:09.898347Z" }, "origin_pos": 8, "tab": [ "pytorch" ] }, "source": [ "class PositionWiseFFN(nn.Module): #@save\n", " \"\"\"The positionwise feed-forward network.\"\"\"\n", " def __init__(self, ffn_num_hiddens, ffn_num_outputs):\n", " super().__init__()\n", " # First linear layer expands features per position without mixing time steps\n", " self.dense1 = nn.LazyLinear(ffn_num_hiddens)\n", " # Non-linearity lets the model learn richer feature interactions\n", " self.relu = nn.ReLU()\n", " # Second linear layer projects the features back to the model dimension\n", " self.dense2 = nn.LazyLinear(ffn_num_outputs)\n", "\n", " def forward(self, X):\n", " # Apply the two-layer MLP to every time step independently\n", " return self.dense2(self.relu(self.dense1(X)))\n" ] }, { "cell_type": "markdown", "id": "cfc40d5b", "metadata": { "origin_pos": 11 }, "source": [ "The following 
example\n", "shows that [**the innermost dimension\n", "of a tensor changes**] to\n", "the number of outputs in\n", "the positionwise feed-forward network.\n", "Since the same MLP transforms\n", "at all the positions,\n", "when the inputs at all these positions are the same,\n", "their outputs are also identical.\n" ] }, { "cell_type": "code", "execution_count": 26, "id": "c462f39f", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:09.906345Z", "iopub.status.busy": "2023-08-18T19:50:09.905327Z", "iopub.status.idle": "2023-08-18T19:50:09.920436Z", "shell.execute_reply": "2023-08-18T19:50:09.919542Z" }, "origin_pos": 13, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[-0.0448, -0.2349, 0.5250, -0.1190, -0.2888, 0.0365, -0.1189, 0.2713],\n", " [-0.0448, -0.2349, 0.5250, -0.1190, -0.2888, 0.0365, -0.1189, 0.2713],\n", " [-0.0448, -0.2349, 0.5250, -0.1190, -0.2888, 0.0365, -0.1189, 0.2713]],\n", " grad_fn=)" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ffn = PositionWiseFFN(4, 8) # Instantiate an FFN that maps 4-D inputs to 8-D outputs\n", "ffn.eval() # Switch to eval mode so LazyLinear layers initialize without dropout noise\n", "ffn(torch.ones((2, 3, 4)))[0] # Run a dummy batch to materialize the parameters\n" ] }, { "cell_type": "markdown", "id": "04e047d8", "metadata": { "origin_pos": 16 }, "source": [ "## Residual Connection and Layer Normalization\n", "\n", "Now let's focus on the \"add & norm\" component in :numref:`fig_transformer`.\n", "As we described at the beginning of this section,\n", "this is a residual connection immediately\n", "followed by layer normalization.\n", "Both are key to effective deep architectures.\n", "\n", "In :numref:`sec_batch_norm`,\n", "we explained how batch normalization\n", "recenters and rescales across the examples within\n", "a minibatch.\n", "As discussed in :numref:`subsec_layer-normalization-in-bn`,\n", "layer normalization is the same as batch normalization\n", "except that the former\n", "normalizes across the feature dimension,\n", "thus enjoying benefits of scale independence and batch size independence.\n", "Despite its pervasive applications\n", "in computer vision,\n", "batch normalization\n", "is usually empirically\n", "less effective than layer normalization\n", "in natural language processing\n", "tasks, where the inputs are often\n", "variable-length sequences.\n", "\n", "The following code snippet\n", "[**compares the normalization across different dimensions\n", "by layer normalization and batch normalization**].\n" ] }, { "cell_type": "code", "execution_count": 27, "id": "81c95717", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:09.926398Z", "iopub.status.busy": "2023-08-18T19:50:09.924518Z", "iopub.status.idle": "2023-08-18T19:50:09.937855Z", "shell.execute_reply": "2023-08-18T19:50:09.936657Z" }, "origin_pos": 18, "tab": [ "pytorch" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "layer norm: tensor([[-1.0000, 1.0000],\n", " [-1.0000, 1.0000]], grad_fn=) \n", "batch norm: tensor([[-1.0000, -1.0000],\n", " [ 1.0000, 1.0000]], grad_fn=)\n" ] } ], "source": [ "ln = nn.LayerNorm(2) # LayerNorm normalizes across features within each sample\n", "bn = nn.LazyBatchNorm1d() # BatchNorm normalizes across the batch dimension\n", "X = torch.tensor([[1, 2], [2, 3]], dtype=torch.float32) # Toy batch to highlight behavior differences\n", "# Compute mean and variance from X in the training mode\n", 
"print('layer norm:', ln(X), '\\nbatch norm:', bn(X)) # Display both normalizations for comparison\n" ] }, { "cell_type": "markdown", "id": "e802e7de", "metadata": { "origin_pos": 21 }, "source": [ "Now we can implement the `AddNorm` class\n", "[**using a residual connection followed by layer normalization**].\n", "Dropout is also applied for regularization.\n" ] }, { "cell_type": "markdown", "id": "331f12e2", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:09.941811Z", "iopub.status.busy": "2023-08-18T19:50:09.941163Z", "iopub.status.idle": "2023-08-18T19:50:09.948019Z", "shell.execute_reply": "2023-08-18T19:50:09.946884Z" }, "origin_pos": 23, "tab": [ "pytorch" ] }, "source": [ "class AddNorm(nn.Module): #@save\n", " \"\"\"The residual connection followed by layer normalization.\"\"\"\n", " def __init__(self, norm_shape, dropout):\n", " super().__init__()\n", " self.dropout = nn.Dropout(dropout) # Drop some sublayer activations for regularization\n", " self.ln = nn.LayerNorm(norm_shape) # Normalize the summed residual for stability\n", "\n", " def forward(self, X, Y):\n", " # Apply dropout to sublayer output Y, add the residual X, then normalize\n", " return self.ln(self.dropout(Y) + X)\n" ] }, { "cell_type": "markdown", "id": "f034f8c8", "metadata": { "origin_pos": 26 }, "source": [ "The residual connection requires that\n", "the two inputs are of the same shape\n", "so that [**the output tensor also has the same shape after the addition operation**].\n" ] }, { "cell_type": "code", "execution_count": 28, "id": "d3835d18", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:09.951769Z", "iopub.status.busy": "2023-08-18T19:50:09.951126Z", "iopub.status.idle": "2023-08-18T19:50:09.957096Z", "shell.execute_reply": "2023-08-18T19:50:09.956115Z" }, "origin_pos": 28, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "add_norm = AddNorm(4, 0.5) # Expect feature dimension 4 and use 50% dropout\n", "shape = (2, 3, 4) # (batch size, time steps, features)\n", "d2l.check_shape(add_norm(torch.ones(shape), torch.ones(shape)), shape) # Residual path keeps the tensor shape\n" ] }, { "cell_type": "markdown", "id": "5d9098c5", "metadata": { "origin_pos": 31 }, "source": [ "## Encoder\n", ":label:`subsec_transformer-encoder`\n", "\n", "With all the essential components to assemble\n", "the Transformer encoder,\n", "let's start by\n", "implementing [**a single layer within the encoder**].\n", "The following `TransformerEncoderBlock` class\n", "contains two sublayers: multi-head self-attention and positionwise feed-forward networks,\n", "where a residual connection followed by layer normalization is employed\n", "around both sublayers.\n" ] }, { "cell_type": "markdown", "id": "015540bd", "metadata": { "origin_pos": 36 }, "source": [ "As we can see,\n", "[**no layer in the Transformer encoder\n", "changes the shape of its input.**]\n" ] }, { "cell_type": "code", "execution_count": 29, "id": "9aefd8d7", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:09.969308Z", "iopub.status.busy": "2023-08-18T19:50:09.968775Z", "iopub.status.idle": "2023-08-18T19:50:09.982374Z", "shell.execute_reply": "2023-08-18T19:50:09.981506Z" }, "origin_pos": 38, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "X = torch.ones((2, 100, 24)) # Dummy batch with 100 tokens and 24 hidden units\n", "valid_lens = torch.tensor([3, 2]) # Only the first few tokens are meaningful per example\n", "encoder_blk = TransformerEncoderBlock(24, 48, 8, 0.5) # Configure one encoder 
block\n", "encoder_blk.eval() # Disable dropout for deterministic shape checking\n", "d2l.check_shape(encoder_blk(X, valid_lens), X.shape) # Encoder blocks preserve the input shape\n" ] }, { "cell_type": "markdown", "id": "21112538", "metadata": { "origin_pos": 41 }, "source": [ "In the following [**Transformer encoder**] implementation,\n", "we stack `num_blks` instances of the above `TransformerEncoderBlock` classes.\n", "Since we use the fixed positional encoding\n", "whose values are always between $-1$ and $1$,\n", "we multiply values of the learnable input embeddings\n", "by the square root of the embedding dimension\n", "to rescale before summing up the input embedding and the positional encoding.\n" ] }, { "cell_type": "markdown", "id": "94344475", "metadata": { "origin_pos": 46 }, "source": [ "Below we specify hyperparameters to [**create a two-layer Transformer encoder**].\n", "The shape of the Transformer encoder output\n", "is (batch size, number of time steps, `num_hiddens`).\n" ] }, { "cell_type": "code", "execution_count": 30, "id": "e09106b4", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:09.996457Z", "iopub.status.busy": "2023-08-18T19:50:09.996186Z", "iopub.status.idle": "2023-08-18T19:50:10.014181Z", "shell.execute_reply": "2023-08-18T19:50:10.013041Z" }, "origin_pos": 48, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "encoder = TransformerEncoder(200, 24, 48, 8, 2, 0.5) # Tiny encoder for demonstration\n", "d2l.check_shape(encoder(torch.ones((2, 100), dtype=torch.long), valid_lens),\n", " (2, 100, 24)) # Output shape matches (batch, time, hidden)\n" ] }, { "cell_type": "markdown", "id": "012a99d0", "metadata": { "origin_pos": 51 }, "source": [ "## Decoder\n", "\n", "As shown in :numref:`fig_transformer`,\n", "[**the Transformer decoder\n", "is composed of multiple identical layers**].\n", "Each layer is implemented in the following\n", "`TransformerDecoderBlock` class,\n", "which contains three sublayers:\n", "decoder self-attention,\n", "encoder--decoder attention,\n", "and positionwise feed-forward networks.\n", "These sublayers employ\n", "a residual connection around them\n", "followed by layer normalization.\n", "\n", "\n", "As we described earlier in this section,\n", "in the masked multi-head decoder self-attention\n", "(the first sublayer),\n", "queries, keys, and values\n", "all come from the outputs of the previous decoder layer.\n", "When training sequence-to-sequence models,\n", "tokens at all the positions (time steps)\n", "of the output sequence\n", "are known.\n", "However,\n", "during prediction\n", "the output sequence is generated token by token;\n", "thus,\n", "at any decoder time step\n", "only the generated tokens\n", "can be used in the decoder self-attention.\n", "To preserve autoregression in the decoder,\n", "its masked self-attention\n", "specifies `dec_valid_lens` so that\n", "any query\n", "only attends to\n", "all positions in the decoder\n", "up to the query position.\n" ] }, { "cell_type": "markdown", "id": "23664727", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:10.017719Z", "iopub.status.busy": "2023-08-18T19:50:10.017425Z", "iopub.status.idle": "2023-08-18T19:50:10.027060Z", "shell.execute_reply": "2023-08-18T19:50:10.026020Z" }, "origin_pos": 53, "tab": [ "pytorch" ] }, "source": [] }, { "cell_type": "markdown", "id": "a9481394", "metadata": { "origin_pos": 56 }, "source": [ "To facilitate scaled dot product operations\n", "in the encoder--decoder attention\n", "and addition 
operations in the residual connections,\n", "[**the feature dimension (`num_hiddens`) of the decoder is\n", "the same as that of the encoder.**]\n" ] }, { "cell_type": "code", "execution_count": 31, "id": "1f487464", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:10.030357Z", "iopub.status.busy": "2023-08-18T19:50:10.030070Z", "iopub.status.idle": "2023-08-18T19:50:10.048172Z", "shell.execute_reply": "2023-08-18T19:50:10.046972Z" }, "origin_pos": 58, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "decoder_blk = TransformerDecoderBlock(24, 48, 8, 0.5, 0) # Single decoder block example\n", "X = torch.ones((2, 100, 24)) # Dummy decoder inputs\n", "state = [encoder_blk(X, valid_lens), valid_lens, [None]] # Mock encoder outputs and cache\n", "d2l.check_shape(decoder_blk(X, state)[0], X.shape) # Decoder block also preserves shape\n" ] }, { "cell_type": "markdown", "id": "5cb8bf72", "metadata": { "origin_pos": 61 }, "source": [ "Now we [**construct the entire Transformer decoder**]\n", "composed of `num_blks` instances of `TransformerDecoderBlock`.\n", "In the end,\n", "a fully connected layer computes the prediction\n", "for all the `vocab_size` possible output tokens.\n", "Both of the decoder self-attention weights\n", "and the encoder--decoder attention weights\n", "are stored for later visualization.\n" ] }, { "cell_type": "markdown", "id": "38ebb1e7", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:10.051657Z", "iopub.status.busy": "2023-08-18T19:50:10.051318Z", "iopub.status.idle": "2023-08-18T19:50:10.061485Z", "shell.execute_reply": "2023-08-18T19:50:10.060579Z" }, "origin_pos": 63, "tab": [ "pytorch" ] }, "source": [] }, { "cell_type": "markdown", "id": "f91f85ae", "metadata": { "origin_pos": 66 }, "source": [ "## [**Training**]\n", "\n", "Let's instantiate an encoder--decoder model\n", "by following the Transformer architecture.\n", "Here we specify that\n", "both the Transformer encoder and the Transformer decoder\n", "have two layers using 4-head attention.\n", "As in :numref:`sec_seq2seq_training`,\n", "we train the Transformer model\n", "for sequence-to-sequence learning on the English--French machine translation dataset.\n" ] }, { "cell_type": "code", "execution_count": 32, "id": "90eb97c4", "metadata": {}, "outputs": [], "source": [ "class WDSmooth_Seq2Seq(d2l.Seq2Seq):\n", " def __init__(self, *args, weight_decay=1e-4, label_smoothing=0.1, **kwargs):\n", " super().__init__(*args, **kwargs)\n", " self.weight_decay = weight_decay\n", " self.criterion = nn.CrossEntropyLoss(ignore_index=self.tgt_pad, label_smoothing=label_smoothing)\n", "\n", " def configure_optimizers(self):\n", " return torch.optim.Adam(self.parameters(), lr=self.lr, weight_decay=self.weight_decay)\n", " \n", " def loss(self, Y_hat, Y):\n", " Y = Y.reshape(-1)\n", " Y_hat = Y_hat.reshape(-1, Y_hat.shape[-1])\n", " return self.criterion(Y_hat, Y)\n" ] }, { "cell_type": "code", "execution_count": 33, "id": "74f2da96", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:10.065769Z", "iopub.status.busy": "2023-08-18T19:50:10.064767Z", "iopub.status.idle": "2023-08-18T19:50:42.759965Z", "shell.execute_reply": "2023-08-18T19:50:42.758647Z" }, "origin_pos": 67, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "image/svg+xml": [ "\n", "\n", "\n", " \n", " \n", " \n", " \n", " 2025-11-27T02:28:19.482548\n", " image/svg+xml\n", " \n", " \n", " Matplotlib v3.9.1, https://matplotlib.org/\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", 
" \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " 
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n" ], "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "\n", "\n", "import gc\n", "gc.collect()\n", "torch.cuda.empty_cache()\n", "\n", "# #data = d2l.MTFraEng(batch_size=128) # Load the English-French translation dataset\n", "# data_path = os.path.expanduser('~/Dropbox/CS6140/data/sentence_pairs_large.tsv')\n", "# data = TSVSeq2SeqData(\n", "# path=data_path,\n", "# batch_size=512,\n", "# num_steps=30,\n", "# min_freq=2,\n", "# val_frac=0.05,\n", "# test_frac=0.0,\n", "# sample_percent=1,\n", "# )\n", "\n", "data = d2l.MTFraEng(batch_size=128)\n", "\n", "embed_size = 256\n", "num_hiddens = 320 \n", "num_blks = 3 \n", "dropout = 0.4 \n", "ffn_num_hiddens = 1280 \n", "num_heads = 8 \n", "lr = 0.001\n", "label_smoothing=0.2\n", "weight_decay=1e-4\n", "\n", "encoder = TransformerEncoder(\n", " len(data.src_vocab), num_hiddens, ffn_num_hiddens, num_heads,\n", " num_blks, dropout) # Source-side encoder\n", "\n", "decoder = TransformerDecoder(\n", " len(data.tgt_vocab), num_hiddens, ffn_num_hiddens, num_heads,\n", " num_blks, dropout) # Target-side decoder\n", "\n", "model = d2l.Seq2Seq(encoder, decoder, tgt_pad=data.tgt_vocab[''],lr=0.001) # Wrap encoder/decoder with training utilities\n", "\n", "model = WDSmooth_Seq2Seq(encoder, decoder, tgt_pad=data.tgt_vocab[''], lr=lr, label_smoothing=label_smoothing, weight_decay=weight_decay)\n", "\n", "\n", "trainer = d2l.Trainer(max_epochs=20, gradient_clip_val=1, num_gpus=1) # Configure trainer\n", "trainer.fit(model, data) # Launch training\n" ] }, { "cell_type": "markdown", "id": "ba393cdc", "metadata": { "origin_pos": 68 }, "source": [ "After training,\n", "we use the Transformer model\n", "to [**translate a few English sentences**] into French and compute their BLEU scores.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "06e5e238", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:42.765441Z", "iopub.status.busy": "2023-08-18T19:50:42.764512Z", "iopub.status.idle": "2023-08-18T19:50:42.852261Z", "shell.execute_reply": "2023-08-18T19:50:42.850805Z" }, "origin_pos": 69, "tab": [ "pytorch" ] }, "outputs": [ { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] }, { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." 
] } ], "source": [ "# examples = ['necesito ayuda urgente .', 'ayer llovio mucho en la ciudad .', 'los ninos estan jugando en el parque .', 'ella quiere aprender a hablar ingles muy bien .', 'cuando llegara el proximo tren a madrid ?']\n", "# references = ['i need urgent help .', 'it rained a lot in the city yesterday .', 'the children are playing in the park .', 'she wants to learn to speak english very well .', 'when will the next train to madrid arrive ?']\n", "\n", "# preds, _ = model.predict_step(\n", "# data.build(examples, references), d2l.try_gpu(), data.num_steps)\n", "# for src, tgt, pred in zip(examples, references, preds):\n", "# translation = []\n", "# for token in data.tgt_vocab.to_tokens(pred):\n", "# if token == '':\n", "# break\n", "# translation.append(token)\n", "# print(f\"{src} => {' '.join(translation)} | reference: {tgt}\")\n", "\n", "\n", "examples = ['vamos .', 'me perdi .', 'esta tranquilo .', 'estoy en casa .', 'donde esta el tren ?', 'necesito ayuda urgente .',\n", " 'ayer llovio mucho en la ciudad .', 'los ninos estan jugando en el parque .', 'ella quiere aprender a hablar ingles muy bien .',\n", " 'cuando llegara el proximo tren a madrid ?']\n", "\n", "references = ['go .', 'i got lost .', 'he is calm .', 'i am at home .', 'where is the train ?',\n", " 'i need urgent help .', 'it rained a lot in the city yesterday .',\n", " 'the children are playing in the park .', 'she wants to learn to speak english very well .', 'when will the next train to madrid arrive ?']\n", "\n", "preds, _ = model.predict_step(\n", " data.build(examples, references), d2l.try_gpu(), data.num_steps)\n", "for src, tgt, pred in zip(examples, references, preds):\n", " translation = []\n", " for token in data.tgt_vocab.to_tokens(pred):\n", " if token == '':\n", " break\n", " translation.append(token)\n", " \n", " hypo = ' '.join(translation)\n", " print(f\"{src} => {hypo} | reference: {tgt} | BLEU: {d2l.bleu(hypo, tgt, k=2):.3f}\")\n" ] }, { "cell_type": "code", "execution_count": null, "id": "49e86e83", "metadata": {}, "outputs": [ { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] }, { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." 
] } ], "source": [ "\n", "\n", "\n", "\n", "\n", "\n", "examples = ['vamos .', 'me perdi .', 'esta tranquilo .', 'estoy en casa .', 'donde esta el tren ?', 'necesito ayuda urgente .',\n", " 'ayer llovio mucho en la ciudad .', 'los ninos estan jugando en el parque .', 'ella quiere aprender a hablar ingles muy bien .',\n", " 'cuando llegara el proximo tren a madrid ?']\n", "\n", "references = ['go .', 'i got lost .', 'he is calm .', 'i am at home .', 'where is the train ?',\n", " 'i need urgent help .', 'it rained a lot in the city yesterday .',\n", " 'the children are playing in the park .', 'she wants to learn to speak english very well .', 'when will the next train to madrid arrive ?']\n", "\n", "for src, tgt in zip(examples, references):\n", " src_sentence = src.lower().split()\n", " src_tokens = [data.src_vocab[token] for token in src_sentence]\n", " pred_ids = beam_search_translate(model, src_tokens, data, beam_size=5, max_steps=40)\n", " translation = data.tgt_vocab.to_tokens(pred_ids)\n", " hypo = ' '.join(translation)\n", " print(f\"{src} => {hypo} | reference: {tgt} | BLEU: {d2l.bleu(hypo, tgt, k=2):.3f}\")\n" ] }, { "cell_type": "markdown", "id": "8ed88841", "metadata": { "origin_pos": 70 }, "source": [ "Let's [**visualize the Transformer attention weights**] when translating the final English sentence into French.\n", "The shape of the encoder self-attention weights\n", "is (number of encoder layers, number of attention heads, `num_steps` or number of queries, `num_steps` or number of key-value pairs).\n" ] }, { "cell_type": "code", "execution_count": null, "id": "e948e78a", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:42.856734Z", "iopub.status.busy": "2023-08-18T19:50:42.856107Z", "iopub.status.idle": "2023-08-18T19:50:42.926580Z", "shell.execute_reply": "2023-08-18T19:50:42.925245Z" }, "origin_pos": 71, "tab": [ "pytorch" ] }, "outputs": [ { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] }, { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." 
] } ], "source": [ "_, dec_attention_weights = model.predict_step(\n", " data.build([engs[-1]], [fras[-1]]), d2l.try_gpu(), data.num_steps, True) # Keep decoder attentions for one example\n", "enc_attention_weights = torch.cat(model.encoder.attention_weights, 0) # Combine per-layer weights\n", "shape = (num_blks, num_heads, -1, data.num_steps) # Desired layout: (layers, heads, queries, keys)\n", "enc_attention_weights = enc_attention_weights.reshape(shape) # Reshape for visualization\n", "d2l.check_shape(enc_attention_weights,\n", " (num_blks, num_heads, data.num_steps, data.num_steps)) # Sanity-check dimensions\n" ] }, { "cell_type": "markdown", "id": "d068fa43", "metadata": { "origin_pos": 73 }, "source": [ "In the encoder self-attention,\n", "both queries and keys come from the same input sequence.\n", "Since padding tokens do not carry meaning,\n", "with specified valid length of the input sequence\n", "no query attends to positions of padding tokens.\n", "In the following,\n", "two layers of multi-head attention weights\n", "are presented row by row.\n", "Each head independently attends\n", "based on a separate representation subspace of queries, keys, and values.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "520c51a5", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:42.931273Z", "iopub.status.busy": "2023-08-18T19:50:42.930241Z", "iopub.status.idle": "2023-08-18T19:50:44.499507Z", "shell.execute_reply": "2023-08-18T19:50:44.498627Z" }, "origin_pos": 75, "tab": [ "pytorch" ] }, "outputs": [ { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] }, { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] } ], "source": [ "d2l.show_heatmaps(\n", " enc_attention_weights.cpu(), xlabel='Key positions',\n", " ylabel='Query positions', titles=['Head %d' % i for i in range(1, 5)],\n", " figsize=(7, 3.5)) # Plot encoder self-attention patterns\n" ] }, { "cell_type": "markdown", "id": "89466194", "metadata": { "origin_pos": 76 }, "source": [ "[**To visualize the decoder self-attention weights and the encoder--decoder attention weights,\n", "we need more data manipulations.**]\n", "For example,\n", "we fill the masked attention weights with zero.\n", "Note that\n", "the decoder self-attention weights\n", "and the encoder--decoder attention weights\n", "both have the same queries:\n", "the beginning-of-sequence token followed by\n", "the output tokens and possibly\n", "end-of-sequence tokens.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "41d25d85", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:44.505810Z", "iopub.status.busy": "2023-08-18T19:50:44.505240Z", "iopub.status.idle": "2023-08-18T19:50:44.519144Z", "shell.execute_reply": "2023-08-18T19:50:44.517957Z" }, "origin_pos": 78, "tab": [ "pytorch" ] }, "outputs": [ { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] }, { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." 
] } ], "source": [ "dec_attention_weights_2d = [head[0].tolist()\n", " for step in dec_attention_weights\n", " for attn in step for blk in attn for head in blk] # Flatten nested structure\n", "dec_attention_weights_filled = torch.tensor(\n", " pd.DataFrame(dec_attention_weights_2d).fillna(0.0).values) # Replace masked entries with zeros\n", "shape = (-1, 2, num_blks, num_heads, data.num_steps) # Separate self vs. cross attention dimensions\n", "dec_attention_weights = dec_attention_weights_filled.reshape(shape) # Restore structured tensor\n", "dec_self_attention_weights, dec_inter_attention_weights = dec_attention_weights.permute(1, 2, 3, 0, 4) # Move axes to (type, layers, heads, queries, keys)\n" ] }, { "cell_type": "code", "execution_count": null, "id": "2ddd5159", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:44.523889Z", "iopub.status.busy": "2023-08-18T19:50:44.523032Z", "iopub.status.idle": "2023-08-18T19:50:44.528387Z", "shell.execute_reply": "2023-08-18T19:50:44.527176Z" }, "origin_pos": 81, "tab": [ "pytorch" ] }, "outputs": [ { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] }, { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] } ], "source": [ "d2l.check_shape(dec_self_attention_weights,\n", " (num_blks, num_heads, data.num_steps, data.num_steps)) # Verify self-attention tensor shape\n", "d2l.check_shape(dec_inter_attention_weights,\n", " (num_blks, num_heads, data.num_steps, data.num_steps)) # Verify encoder-decoder tensor shape\n" ] }, { "cell_type": "markdown", "id": "544463d1", "metadata": { "origin_pos": 82 }, "source": [ "Because of the autoregressive property of the decoder self-attention,\n", "no query attends to key--value pairs after the query position.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "e90d27e7", "metadata": {}, "outputs": [ { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] }, { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] } ], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "8430c053", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:44.533271Z", "iopub.status.busy": "2023-08-18T19:50:44.532389Z", "iopub.status.idle": "2023-08-18T19:50:45.954406Z", "shell.execute_reply": "2023-08-18T19:50:45.953261Z" }, "origin_pos": 83, "tab": [ "pytorch" ] }, "outputs": [ { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] }, { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." 
] } ], "source": [ "d2l.show_heatmaps(\n", " dec_self_attention_weights[:, :, :, :],\n", " xlabel='Key positions', ylabel='Query positions',\n", " titles=['Head %d' % i for i in range(1, 5)], figsize=(7, 3.5)) # Visualize decoder self-attention\n" ] }, { "cell_type": "markdown", "id": "34030fb3", "metadata": { "origin_pos": 84 }, "source": [ "Similar to the case in the encoder self-attention,\n", "via the specified valid length of the input sequence,\n", "[**no query from the output sequence\n", "attends to those padding tokens from the input sequence.**]\n" ] }, { "cell_type": "code", "execution_count": null, "id": "1c0b1dfe", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T19:50:45.958174Z", "iopub.status.busy": "2023-08-18T19:50:45.957587Z", "iopub.status.idle": "2023-08-18T19:50:47.397366Z", "shell.execute_reply": "2023-08-18T19:50:47.396481Z" }, "origin_pos": 85, "tab": [ "pytorch" ] }, "outputs": [ { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] }, { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] } ], "source": [ "d2l.show_heatmaps(\n", " dec_inter_attention_weights, xlabel='Key positions',\n", " ylabel='Query positions', titles=['Head %d' % i for i in range(1, 5)],\n", " figsize=(7, 3.5)) # Visualize encoder-decoder attention\n" ] }, { "cell_type": "code", "execution_count": null, "id": "05dbcaff", "metadata": {}, "outputs": [ { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] }, { "ename": "", "evalue": "", "output_type": "error", "traceback": [ "\u001b[1;31mnotebook controller is DISPOSED. \n", "\u001b[1;31mView Jupyter log for further details." ] } ], "source": [ "del model, data # remove large tensors\n", "import gc, torch\n", "gc.collect()\n", "torch.cuda.empty_cache()" ] }, { "cell_type": "markdown", "id": "a25ea19e", "metadata": { "origin_pos": 86 }, "source": [ "Although the Transformer architecture\n", "was originally proposed for sequence-to-sequence learning,\n", "as we will discover later in the book,\n", "either the Transformer encoder\n", "or the Transformer decoder\n", "is often individually used\n", "for different deep learning tasks.\n", "\n", "## Summary\n", "\n", "The Transformer is an instance of the encoder--decoder architecture,\n", "though either the encoder or the decoder can be used individually in practice.\n", "In the Transformer architecture, multi-head self-attention is used\n", "for representing the input sequence and the output sequence,\n", "though the decoder has to preserve the autoregressive property via a masked version.\n", "Both the residual connections and the layer normalization in the Transformer\n", "are important for training a very deep model.\n", "The positionwise feed-forward network in the Transformer model\n", "transforms the representation at all the sequence positions using the same MLP.\n", "\n", "\n", "## Exercises\n", "\n", "1. Train a deeper Transformer in the experiments. How does it affect the training speed and the translation performance?\n", "1. Is it a good idea to replace scaled dot product attention with additive attention in the Transformer? Why?\n", "1. For language modeling, should we use the Transformer encoder, decoder, or both? 
How would you design this method?\n", "1. What challenges can Transformers face if input sequences are very long? Why?\n", "1. How would you improve the computational and memory efficiency of Transformers? Hint: you may refer to the survey paper by :citet:`Tay.Dehghani.Bahri.ea.2020`.\n" ] }, { "cell_type": "markdown", "id": "97dcc07a", "metadata": { "origin_pos": 88, "tab": [ "pytorch" ] }, "source": [ "[Discussions](https://discuss.d2l.ai/t/1066)\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python_mac_d2l", "language": "python", "name": "d2l" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.14.0" }, "required_libs": [] }, "nbformat": 4, "nbformat_minor": 5 }